
    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Effects such as occlusion, parallax, shadowing, and terrain/building elevation can often be mitigated with even modest amounts of 3D target modeling. Additionally, the imaged scene may appear radically different based on the sensed modality of interest; this is evident from the differences in visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. In order to accomplish this, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that utilize the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often neglected final phase of mapping localized image registration results back to the world coordinate system model for final data archival is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration.
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis.
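The registration step described above hinges on projecting the 3D site model into the viewing geometry of a captured image. A minimal sketch of that projection, using a standard pinhole camera model; the intrinsic matrix K and pose (R, t) here are illustrative stand-ins, not values or interfaces from the thesis:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D world points into 2D image coordinates (pinhole model).

    points_3d : (N, 3) array of world coordinates
    K         : (3, 3) camera intrinsic matrix
    R, t      : rotation (3, 3) and translation (3,) mapping world -> camera
    """
    cam = points_3d @ R.T + t        # world frame -> camera frame
    uvw = cam @ K.T                  # apply intrinsics (homogeneous pixel coords)
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Toy example: camera at the origin looking down +Z, focal length 100,
# principal point at (50, 50)
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],   # on the optical axis
                [1.0, 0.0, 10.0]])  # offset 1 unit in world X
uv = project_points(pts, K, R, t)
# uv -> [[50., 50.], [60., 50.]]
```

Rendering the geometric model through such a projection at the estimated sensor pose is what allows a synthetic view to be compared directly against the captured image.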

    Multisensor Image Registration Utilizing the LoG Filter and FWT

    This thesis examines the utility of automated image registration techniques developed by the author. The major thrusts of this research include using the Laplacian of Gaussian (LoG) filter to automatically determine ground control points (GCPs) and wavelet theory for multiresolution analysis. Additionally, advances in both composite and predictive transformations will be covered. The defense will include an overview of the processes involved in general image registration and specifically how they pertain to automation with the techniques utilized in this thesis. Use of the LoG filter to extract semi-invariant GCPs, development of automated point matching schemas, and the use of matrix transformations for efficient management of affine image relationships will be explained in detail. Additionally, the ability to apply statistical analysis to both local and image-wide sets of GCPs will be discussed. The student-developed software application, LoG Wavelet Registration (LoGWaR), will demonstrate the utility of these techniques for processing large datasets such as LANDSAT and how integration of these features can provide both power and flexibility when registering multiresolution and/or multisensor images. Automation techniques will be highlighted, demonstrating the strengths and weaknesses when applied to images with high degrees of parallax, cloud cover, and other types of temporal change. Specific applications, such as wavelet sharpening and spectral analysis, will be addressed as they pertain to current research.
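The LoG-based GCP extraction described above can be sketched in a few lines: convolve the image with a Laplacian-of-Gaussian kernel and take local extrema of the response as candidate control points. This is an illustrative sketch using SciPy's `gaussian_laplace`, not the author's LoGWaR implementation; the `sigma` and `threshold` parameters are assumptions chosen for the toy example:

```python
import numpy as np
from scipy import ndimage

def log_keypoints(image, sigma=2.0, threshold=0.1):
    """Return candidate ground control points as (row, col) locations of
    local extrema of the Laplacian-of-Gaussian response (blob-like features)."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    mag = np.abs(response)
    # keep pixels that are local maxima of |response| and clearly above noise
    local_max = mag == ndimage.maximum_filter(mag, size=5)
    strong = mag > threshold * mag.max()
    return np.argwhere(local_max & strong)

# Toy image: a single bright blob on a dark background
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
pts = log_keypoints(img, sigma=2.0)
# pts includes at least one detection at or near the blob center
```

In practice such candidates would be matched between the reference and sensed images, after which the affine relationship can be estimated and composed with prior transforms via matrix multiplication, as the abstract describes.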